Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms

Author

  • Xiong Yan
Abstract

This paper investigates an online gradient method with an inner-penalty term for a class of feedforward networks called pi-sigma networks. This network uses product cells as output units to indirectly incorporate the capabilities of higher-order networks while requiring fewer weights and processing units. Penalty-term methods have been widely used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. The monotonicity of the error function and the boundedness of the weights under the inner-penalty term, as well as both weak and strong convergence theorems for the training iteration, are proved.

Keywords: Convergence, Pi-sigma Network, Online Gradient Method, Inner-penalty, Boundedness
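
As a point of reference, the following is a minimal Python sketch of online gradient training for a pi-sigma network with a quadratic penalty on the weights. The architecture (linear summing units feeding a single product unit with a sigmoid output) follows the standard pi-sigma formulation; the specific penalty form, learning rate, and all names (train_pi_sigma, eta, lam) are illustrative assumptions and need not match the paper's exact inner-penalty term.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_pi_sigma(X, t, K=3, eta=0.05, lam=1e-4, epochs=100, seed=0):
        """Online gradient training of a pi-sigma network (illustrative sketch).

        X: (N, n) inputs; t: (N,) targets in (0, 1).
        K linear summing units feed one product unit followed by a sigmoid.
        lam is the coefficient of an assumed quadratic penalty on the weights.
        """
        rng = np.random.default_rng(seed)
        N, n = X.shape
        W = 0.1 * rng.standard_normal((K, n))         # weights of the K summing units

        for _ in range(epochs):
            for i in range(N):                        # online: update after each example
                x, target = X[i], t[i]
                h = W @ x                             # summing-unit outputs
                y = sigmoid(np.prod(h))               # product unit + sigmoid output
                delta = (y - target) * y * (1.0 - y)  # derivative of squared error w.r.t. the product
                for j in range(K):
                    # d(prod h)/d(w_j) = (product of the other h_i) * x
                    grad = delta * np.prod(np.delete(h, j)) * x
                    W[j] -= eta * (grad + lam * W[j])  # penalty contributes lam * w_j
        return W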

Similar articles

Training Pi-Sigma Network by Online Gradient Algorithm with Penalty for Small Weight Update

A pi-sigma network is a class of feedforward neural networks with product units in the output layer. The online gradient algorithm is the simplest and most widely used training method for feedforward neural networks. A problem arises, however, when the online gradient algorithm is applied to pi-sigma networks: the update increment of the weights may become very small, especially early in tra...
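
For context, the source of this small-increment problem can be seen from the gradient of the squared error with respect to the weight vector w_j of the j-th summing unit. With h_i = w_i \cdot x and y = \sigma(\prod_i h_i), a standard calculation (an illustrative derivation, not quoted from the paper) gives

    \frac{\partial E}{\partial w_j} = (y - t)\,\sigma'\Big(\prod_i h_i\Big)\Big(\prod_{i \neq j} h_i\Big)\,x,

so whenever one of the other summing-unit outputs h_i is close to zero, as often happens with small initial weights, the online update of w_j is nearly zero as well.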

Full text

Convergence of an Online Gradient Algorithm with Penalty for Two-layer Neural Networks

The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. A penalty term is a common and popular method for improving the generalization performance of networks. In this paper, a convergence theorem is proved for the online gradient learning algorithm with a penalty, a term proportional to the magnitude of the weights. The monotonicity of the error ...
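
A penalty proportional to the magnitude of the weights is typically added to the instantaneous error, so that each online step takes a weight-decay form (a generic sketch; the constants and notation are assumptions, not the paper's exact statement):

    w^{k+1} = w^k - \eta_k\big(\nabla E_{i_k}(w^k) + \lambda w^k\big),

where E_{i_k} is the error on the example presented at step k, \eta_k is the learning rate, and \lambda > 0 is the penalty coefficient.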

Full text

Batch Gradient Method for Training of Pi-Sigma Neural Network with Penalty

In this letter, we describe the convergence of a batch gradient method with a penalty term for a feedforward neural network called the pi-sigma neural network, which employs product cells as output units to implicitly incorporate the capabilities of higher-order neural networks while using a minimal number of weights and processing units. As a rule, the penalty term is condition ...
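
In contrast to the online rule above, the batch method accumulates the gradient over the whole training set before each weight change; schematically (again an illustrative sketch rather than the paper's exact formulation):

    w^{k+1} = w^k - \eta\Big(\sum_{i=1}^{N}\nabla E_i(w^k) + \lambda w^k\Big),

where N is the number of training examples and the term \lambda w^k comes from the penalty.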

Full text

Convergence of Online Gradient Method with a Penalty Term for Feedforward Neural Networks with Stochastic

The online gradient algorithm has been widely used as a learning algorithm for feedforward neural network training. In this paper, we prove a weak convergence theorem for an online gradient algorithm with a penalty term, assuming that the training examples are presented in a stochastic order. The monotonicity of the error function in the iteration and the boundedness of the weights are both guaran...
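
The stochastic-input assumption means that the index \xi_k of the example presented at step k is drawn at random, so the penalized online step is a noisy but unbiased estimate of the full-batch direction; in expectation (a standard observation, stated here only for orientation),

    \mathbb{E}\big[\nabla E_{\xi_k}(w)\big] = \frac{1}{N}\sum_{i=1}^{N}\nabla E_i(w),

which is the property that convergence arguments for such algorithms typically rely on.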

Full text

Convergence Analysis of Multilayer Feedforward Networks Trained with Penalty Terms: a Review

The gradient descent method is one of the most popular methods for training feedforward neural networks. Batch and incremental modes are the two most common ways to implement gradient-based training for such networks in practice. Furthermore, since generalization is an important property and quality criterion of a trained network, pruning algorithms with the addition of regularization terms have been...

Full text


Journal:

Volume:   Issue:

Pages: -

Publication date: 2016